AI Increases The Risk Of Nuclear Annihilation

Authored by John Mac Ghlionn via The Epoch Times,

OpenAI, the company responsible for ChatGPT, recently announced the creation of a new team with a very specific task: to stop AI models from posing “catastrophic risks” to humanity.

Preparedness, the aptly titled team, will be overseen by Aleksander Madry, a machine-learning expert and Massachusetts Institute of Technology-affiliated researcher. Mr. Madry and his team will focus on various threats, most notably those of the "chemical, biological, radiological and nuclear" variety. These might seem like far-fetched threats, but they really shouldn't.

As the United Nations reported earlier this year, the risk of countries turning to nuclear weapons is at its highest point since the Cold War. This report was published before the horrific events that occurred in Israel on Oct. 7. A close ally of Vladimir Putin’s, Nikolai Patrushev, recently suggested that the “destructive” policies of “the United States and its allies were increasing the risk that nuclear, chemical or biological weapons would be used,” according to Reuters.

Merge AI with such weapons, particularly nuclear weapons, cautions Zachary Kallenborn, a research affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism (START), and you have a recipe for unmitigated disaster.

Mr. Kallenborn has sounded the alarm, repeatedly and unapologetically, on the unholy alliance between AI and nuclear weapons. Not one to mince words, the researcher warned, “If artificial intelligences controlled nuclear weapons, all of us could be dead.”

He isn’t exaggerating. Exactly 40 years ago, as Mr. Kallenborn, a policy fellow at the Schar School of Policy and Government, described, Stanislav Petrov, a Soviet Air Defense Forces lieutenant colonel, was busy monitoring his country’s nuclear warning systems. All of a sudden, according to Mr. Kallenborn, “the computer concluded with the highest confidence that the United States had launched a nuclear war.” Mr. Petrov, however, was skeptical, largely because he didn’t trust the current detection system. Moreover, the radar system lacked corroborative evidence.

Thankfully, Mr. Petrov concluded that the message was a false positive and opted against taking action. Spoiler alert: The computer was completely wrong, and the Russian was completely right.

“But,” noted Mr. Kallenborn, a national security consultant, “if Petrov had been a machine, programmed to respond automatically when confidence was sufficiently high, that error would have started a nuclear war.”

Furthermore, he suggested, there’s absolutely “no guarantee” that certain countries “won’t put AI in charge of nuclear launches,” because international law “doesn’t specify that there should always be a ‘Petrov’ guarding the button.”

“That’s something that should change, soon,” Mr. Kallenborn said.

He told me that AI is already reshaping the future of warfare.

Artificial intelligence, according to Mr. Kallenborn, “can help militaries quickly and more effectively process vast amounts of data generated by the battlefield; make the defense industrial base more effective and efficient at producing weapons at scale, and may be able to improve weapons targeting and decision-making.”

Take, for example, China, arguably the biggest threat to the United States, and its AI-powered military applications. According to a report out of Georgetown University, in the not-so-distant future, Beijing may use AI not just to assist during wartime but to actually oversee all acts of warfare.

This should concern all readers.

If the launch of nuclear weapons is delegated to an autonomous system, Mr. Kallenborn fears, they "could be launched in error, leading to an accidental nuclear war."

“Adding AI into nuclear command and control,” he said, “may also lead to misleading or bad information.”

He’s right. AI depends on data, and sometimes data are wildly inaccurate.

Although there isn’t one particular country that keeps Mr. Kallenborn awake at night, he’s worried by “the possibility of Russian President Vladimir Putin using small nuclear weapons in the Ukraine conflict.” Even limited nuclear usage “would be quite bad over the long-term” because “the nuclear taboo” would be removed, thus “encouraging other states to be more cavalier with nuclear weapons usage.”

“Nuclear weapons,” according to Mr. Kallenborn, are the “biggest threat to humanity.”

“They are the only weapon in existence that can cause enough harm to truly cause human extinction,” he said.

As mentioned earlier, throwing AI into the nuclear mix appears to increase the risk of mass extinction. The warnings of Mr. Kallenborn, a well-respected researcher who has dedicated years of his life to studying the evolution of nuclear warfare, carry a great deal of credibility.
